Manifold Regularized Experimental Design for Active Learning


Similar resources

Manifold Regularized Multi-Task Learning

Multi-task learning (MTL) has drawn a lot of attention in machine learning. By training multiple tasks simultaneously, information can be better shared across tasks, which leads to significant performance improvements in many problems. However, most existing methods assume that all tasks are related or that their relationship follows a simple, pre-specified structure. In this paper, we propose a novel...


Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning

Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term whose purpose is to induce structure in the solution, the solution often lies on a low-dimensional manifold of parameter space along w...
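The mechanism described above can be illustrated with the classic l1 case: in regularized dual averaging, each step applies a closed-form soft threshold to the running average of subgradients, so coordinates with small averaged gradients are set exactly to zero and the iterate lands on the sparse manifold. The sketch below is illustrative only; the function name and all parameter values are assumptions, not from the paper.

```python
import numpy as np

def rda_l1_step(g_bar, t, lam, gamma):
    """One dual-averaging step with an l1 regularizer (hypothetical helper).

    g_bar : running average of subgradients through iteration t
    lam   : l1 regularization weight
    gamma : constant scaling the sqrt(t) proximal term

    Coordinates where |g_bar| <= lam are zeroed exactly, so the iterate
    identifies the low-dimensional manifold {w : w_i = 0 off the support}.
    """
    shrunk = np.maximum(np.abs(g_bar) - lam, 0.0)  # soft threshold
    return -(np.sqrt(t) / gamma) * np.sign(g_bar) * shrunk

# Small-magnitude coordinates (0.05, -0.02) fall below lam and are zeroed.
g_bar = np.array([0.05, -0.8, 0.3, -0.02])
w = rda_l1_step(g_bar, t=100, lam=0.1, gamma=5.0)
# → array([ 0. ,  1.4, -0.4,  0. ])
```

Because the threshold acts on the *averaged* gradient rather than a single noisy sample, the zero pattern stabilizes over time, which is the manifold-identification property the abstract refers to.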


Manifold Regularized Transfer Distance Metric Learning

The performance of many computer vision and machine learning algorithms depends heavily on the distance metric between samples. It is necessary to exploit abundant side information, such as pairwise constraints, to learn a robust and reliable distance metric. In real-world applications, however, large quantities of labeled data are unavailable due to the high labeling cost. Transfer distance metri...


Regularized algorithms for ranking, and manifold learning for related tasks





Journal

Journal title: IEEE Transactions on Image Processing

Year: 2017

ISSN: 1057-7149, 1941-0042

DOI: 10.1109/tip.2016.2635440